Linearized Alternating Direction Method with Parallel Splitting and Adaptive Penalty for Separable Convex Programs in Machine Learning

Abstract

Many problems in machine learning and other fields can be (re)formulated as linearly constrained separable convex programs. In most of the cases, there are multiple blocks of variables. However, the traditional alternating direction method (ADM) and its linearized version (LADM, obtained by linearizing the quadratic penalty term) are for the two-block case and cannot be naively generalized to solve the multi-block case. So there is great demand on extending the ADM based methods for the multi-block case. In this paper, we propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve multi-block separable convex programs efficiently. When all the component objective functions have bounded subgradients, we obtain convergence results that are stronger than those of ADM and LADM, e.g., allowing the penalty parameter to be unbounded and proving the sufficient and necessary conditions for global convergence. We further propose a simple optimality measure and reveal the convergence rate of LADMPSAP in an ergodic sense. For programs with extra convex set constraints, with refined parameter estimation we devise a practical version of LADMPSAP for faster convergence. Finally, we generalize LADMPSAP to handle programs with more difficult objective functions by linearizing part of the objective function as well. LADMPSAP is particularly suitable for sparse representation and low-rank recovery problems because its subproblems have closed form solutions and the sparsity and low-rankness of the iterates can be preserved during the iteration. It is also highly parallelizable and hence fits for parallel or distributed computing. Numerical experiments testify to the advantages of LADMPSAP in speed and numerical accuracy.
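The abstract describes LADMPSAP only at a high level. As an illustration of the parallel-splitting and adaptive-penalty ideas, below is a minimal NumPy sketch for the special case min Σ_i ||x_i||_1 subject to Σ_i A_i x_i = b, where every block subproblem reduces to closed-form soft-thresholding. The function name ladmpsap_l1, the parameter defaults, the choice of η_i, and the stopping rule are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau*||.||_1; closed form, keeps iterates sparse.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ladmpsap_l1(A_list, b, beta0=0.1, beta_max=1e6, rho0=1.9,
                eps=1e-4, max_iter=1000):
    # Hypothetical sketch: min sum_i ||x_i||_1  s.t.  sum_i A_i x_i = b.
    n = len(A_list)
    xs = [np.zeros(A.shape[1]) for A in A_list]
    lam = np.zeros_like(b, dtype=float)          # Lagrange multiplier
    beta = beta0
    # eta_i should dominate n*||A_i||_2^2; the factor 1.01 is an assumption.
    etas = [1.01 * n * np.linalg.norm(A, 2) ** 2 for A in A_list]
    b_norm = np.linalg.norm(b) + 1e-12
    for _ in range(max_iter):
        resid = sum(A @ x for A, x in zip(A_list, xs)) - b
        lam_hat = lam + beta * resid             # predicted multiplier
        # Parallel splitting: every block reads the SAME lam_hat, so these
        # n proximal updates are independent and could run concurrently.
        new_xs = [soft_threshold(x - (A.T @ lam_hat) / (eta * beta),
                                 1.0 / (eta * beta))
                  for A, x, eta in zip(A_list, xs, etas)]
        new_resid = sum(A @ x for A, x in zip(A_list, new_xs)) - b
        lam = lam + beta * new_resid             # multiplier update
        # Adaptive penalty: enlarge beta only when the blocks barely moved.
        moved = max(np.sqrt(eta) * np.linalg.norm(nx - x)
                    for nx, x, eta in zip(new_xs, xs, etas))
        beta = min(beta_max, (rho0 if beta * moved / b_norm < eps else 1.0) * beta)
        xs = new_xs
        if np.linalg.norm(new_resid) / b_norm < eps:
            break
    return xs

The three properties highlighted in the abstract are visible in this sketch: all block updates share one predicted multiplier lam_hat and are therefore parallelizable, each update is a closed-form soft-threshold so the sparsity of the iterates is preserved, and the penalty beta grows only when progress stalls, which is the adaptive-penalty rule that permits an unbounded penalty parameter.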

Bibliographic Record

  • Year: 2014
  • Format: PDF
  • Language: English
